The Fact-Checker Agent (ReAct)
PolyU COMP5511 Lab 12 | 2026-04-13

The Problem with Blind Actions

When solving complex logic problems or retrieving multiple pieces of information, forcing an AI to immediately decide on an Action can lead to disastrous errors.

Think about how a human solves a jigsaw puzzle:

  • They don't just grab a piece randomly and jam it into the board.
  • They mentally or verbally map out a strategy: "First, I need to find the flat edge pieces for the border, then I'll see if this blue piece fits."

Similarly, if we prompt an AI to output only an Action, skipping the reasoning phase, it is "acting blindly". It will often guess, hallucinate, or pick the wrong tool entirely because it has not mapped out the prerequisites for answering the query.

Thinking Out Loud
Language models construct logic token by token. Forcing the AI to "think out loud" (generating text that analyzes the problem) before it selects an action gives it the workspace it needs to reach the correct logical conclusion.
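The Thought-then-Action cycle can be sketched as a single ReAct step. This is a minimal illustration, not the lab's implementation: `fake_llm` is a hypothetical stand-in for a real model call, and `TOOLS` is an assumed toy tool registry. The point is the output format, where a `Thought:` line must appear before the `Action:` line so the model reasons about prerequisites before committing to a tool.

```python
import re

# Hypothetical toy tool registry (an assumption for illustration).
TOOLS = {
    "lookup": lambda q: {"capital of France": "Paris"}.get(q, "not found"),
}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call. A ReAct prompt instructs the model
    # to emit its reasoning (Thought) BEFORE choosing a tool (Action).
    return (
        "Thought: I need the capital of France before I can answer.\n"
        "Action: lookup[capital of France]"
    )

def react_step(prompt: str) -> str:
    """Run one Thought -> Action -> Observation cycle."""
    reply = fake_llm(prompt)
    # The Thought line is the model's visible workspace; we parse it but
    # only the Action is executed.
    thought = re.search(r"Thought:\s*(.+)", reply).group(1)
    action = re.search(r"Action:\s*(\w+)\[(.*)\]", reply)
    tool, arg = action.group(1), action.group(2)
    observation = TOOLS[tool](arg)  # execute the chosen tool
    return f"Observation: {observation}"

print(react_step("What is the capital of France?"))  # Observation: Paris
```

In a full agent loop, the `Observation:` string would be appended to the prompt and the cycle repeated until the model emits a final answer instead of an Action.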
[Figure: two AI brains contrasted. Left: a brain acting impulsively, marked with warning and error symbols. Right: a brain reasoning step by step in a thought bubble, successfully assembling a puzzle.]